Decision Tree Methods for Finding Reusable MDP Homomorphisms

Authors

  • Alicia P. Wolfe
  • Andrew G. Barto
Abstract

State abstraction is a useful tool for agents interacting with complex environments. Good state abstractions are compact, reusable, and easy to learn from sample data. This paper combines and extends two existing classes of state abstraction methods to achieve these criteria. The first class of methods searches for MDP homomorphisms (Ravindran 2004), which produce models of reward and transition probabilities in an abstract state space. The second class of methods, like the UTree algorithm (McCallum 1995), learns compact models of the value function quickly from sample data. Models based on MDP homomorphisms can easily be extended such that they are usable across tasks with similar reward functions. However, value-based methods like UTree cannot be extended in this fashion. We present results showing that a new, combined algorithm fulfills all three criteria: the resulting models are compact, can be learned quickly from sample data, and can be used across a class of reward functions.
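
For reference, an MDP homomorphism in the sense of Ravindran (2004) maps state-action pairs of a ground MDP onto a smaller abstract MDP while preserving rewards and block transition probabilities. A compact statement of the usual conditions (generic notation, not necessarily the paper's own) is:

```latex
% MDP homomorphism h = (f, \{g_s\}) from M = \langle S, A, P, R \rangle
% onto M' = \langle S', A', P', R' \rangle, for all states s, t and actions a:
R'\bigl(f(s), g_s(a)\bigr) = R(s, a),
\qquad
P'\bigl(f(t) \mid f(s), g_s(a)\bigr) = \sum_{t' \in f^{-1}(f(t))} P(t' \mid s, a).
% Rewards must match, and the probability of reaching the abstract block of t
% must equal the total probability of reaching any ground state in that block.
```

An abstract state corresponds to a block f^{-1}(s') of ground states that behave identically under these conditions.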

Related Papers

Improving UCT planning via approximate homomorphisms

In this paper we show how abstractions can help UCT’s performance. Ideal abstractions are homomorphisms because they preserve optimal policies, but they rarely exist, and are computationally hard to find even when they do. We show how a combination of (i) finding local abstractions in the layered-DAG MDP induced by a set of UCT trajectories (rather than finding abstractions in the global MDP), ...
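
As a rough illustration only (the function and variable names below are hypothetical, and the paper concerns approximate rather than exact abstractions), grouping the nodes of one layer of a trajectory DAG by matching immediate rewards and lumped successor distributions might look like:

```python
from collections import defaultdict

def aggregate_layer(nodes, reward, successors, block_of):
    """Group one DAG layer's nodes by an (action, reward, abstract-successor) signature.

    nodes:      node ids in the current layer
    reward:     reward[(node, action)] -> immediate reward
    successors: successors[(node, action)] -> {next_node: probability}
    block_of:   block id already assigned to each next-layer node
    """
    groups = defaultdict(list)
    for n in nodes:
        signature = []
        for (node, action), probs in successors.items():
            if node != n:
                continue
            # Lump successor probabilities by the abstract block of each successor.
            lumped = defaultdict(float)
            for nxt, p in probs.items():
                lumped[block_of[nxt]] += p
            signature.append((action, reward[(node, action)],
                              tuple(sorted(lumped.items(), key=repr))))
        # Nodes with identical signatures behave identically one step ahead
        # (with respect to the next layer's blocks) and can be merged.
        groups[tuple(sorted(signature, key=repr))].append(n)
    return list(groups.values())
```

In practice the matching would use tolerances on rewards and probabilities rather than exact equality.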

Bounding Performance Loss in Approximate MDP Homomorphisms

We define a metric for measuring behavior similarity between states in a Markov decision process (MDP), which takes action similarity into account. We show that the kernel of our metric corresponds exactly to the classes of states defined by MDP homomorphisms (Ravindran & Barto, 2003). We prove that the difference in the optimal value function of different states can be upper-bounded by the val...
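
The bound alluded to at the end of the excerpt is usually written in the following form, where d is the behaviour metric and V* the optimal value function (generic notation; constants depend on how the metric weights its reward and transition terms):

```latex
% Value-difference bound for a behaviour metric d whose kernel is the
% MDP-homomorphism partition:
|V^{*}(s) - V^{*}(t)| \;\le\; d(s, t) \qquad \text{for all states } s, t,
% so states at distance zero under d have equal optimal values, and small
% distances bound the loss incurred by aggregating those states.
```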

Hierarchical Solution of Large Markov Decision Processes

This paper presents an algorithm for finding approximately optimal policies in very large Markov decision processes by constructing a hierarchical model and then solving it. This strategy sacrifices optimality for the ability to address a large class of very large problems. Our algorithm works efficiently on enumerated-states and factored MDPs by constructing a hierarchical structure that is no...

Offline Monte Carlo Tree Search for Statistical Model Checking of Markov Decision Processes

To find the optimal policy for large Markov Decision Processes (MDPs), where state space explosion makes analytic methods infeasible, we turn to statistical methods. In this work, we apply Monte Carlo Tree Search to learning the optimal policy for an MDP with respect to a Probabilistic Bounded Linear Temporal Logic property. After we have the policy, we can proceed with statistical model checkin...
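
Once the policy is fixed, the statistical model-checking step amounts to Monte Carlo estimation of the probability that the bounded temporal-logic property holds. A minimal sketch of that generic estimation step (function names are hypothetical; this is not the paper's specific procedure) is:

```python
import math

def estimate_satisfaction(simulate_episode, holds, epsilon=0.01, delta=0.05):
    """Estimate P(property holds) under a fixed policy by simple Monte Carlo.

    simulate_episode: callable returning one sampled trace of the MDP under the policy
    holds:            callable mapping a trace to True/False for the bounded property
    epsilon, delta:   additive error and confidence used to pick the sample size
    """
    # Hoeffding bound: n >= ln(2 / delta) / (2 * epsilon^2) samples guarantee
    # P(|estimate - true probability| > epsilon) <= delta.
    n_samples = math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))
    successes = sum(1 for _ in range(n_samples) if holds(simulate_episode()))
    return successes / n_samples
```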

Piecewise Linear Value Function Approximation for Factored MDPs

A number of proposals have been put forth in recent years for the solution of Markov decision processes (MDPs) whose state (and sometimes action) spaces are factored. One recent class of methods involves linear value function approximation, where the optimal value function is assumed to be a linear combination of some set of basis functions, with the aim of finding suitable weights. While sophi...
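
The linear approximation described above has the standard form below; in the factored setting each basis function typically depends only on a small subset of the state variables (generic notation, not the paper's):

```latex
% Linear value function approximation over basis functions h_1, ..., h_k:
V_{w}(s) \;=\; \sum_{i=1}^{k} w_i \, h_i(s),
% where the weights w_i are the quantities to be fit, and in a factored MDP each
% h_i is defined over a small subset of the state variables so that the required
% expectations can be computed without enumerating the full state space.
```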

Publication date: 2006